Friday, January 31, 2025

The DeepSeek Artificial Intelligence Breakthrough: What Is Really Going On?

This week’s AI news was the most powerful to come out of that surging field in years.  What on earth happened?

The day the story broke brought “What to Know About DeepSeek and How It Is Upending A.I.” (Cade Metz, The New York Times, January 27th).  Immediately, “tech stocks tumbled.  Giant companies like Meta and Nvidia faced a barrage of questions about their future.  Tech executives took to social media to proclaim their fears.  And it was all because of a little-known Chinese artificial intelligence start-up called DeepSeek.”  Word spread that the company had “created a very powerful A.I. model with far less money than many A.I. experts thought possible.”  Veteran technology reporter Metz posed and answered ten questions.  “What is DeepSeek?... A start-up founded and owned by the Chinese stock trading firm High-Flyer,” which “by 2021… had acquired thousands of computer chips from the U.S. chipmaker Nvidia,” and “on Jan. 10… released its first free chatbot app.”  “Why did the stock market react to it now?”  Because DeepSeek released a research paper claiming that while “the world’s top companies typically train their chatbots with supercomputers that use as many as 16,000 chips or more,” its own “engineers said they needed only about 2,000.”  “Why is that important?”  Cost.  “How did DeepSeek make its tech with fewer A.I. chips?”  By “spreading… data analysis across several specialized A.I. models” instead of doing it all together.  “Is DeepSeek’s tech as good as systems from OpenAI and Google?”  Yes, according to “standard benchmark tests.”  “U.S. tech giants are building data centers with specialized A.I. chips.  Does this still matter, given what DeepSeek has done?”  Yes, the additional chips will still be useful for future and peripheral AI purposes.  “Hasn’t the United States limited the number of Nvidia chips sold to China?”  Yes, and that has “forced researchers in China to get creative.”  “Does DeepSeek’s tech mean that China is now ahead of the United States in A.I.?”  No, not in general.
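That “spreading” is widely described as a mixture-of-experts design, in which a small router sends each piece of input to a few specialized sub-networks instead of running everything through one monolithic model.  Below is a minimal, hypothetical sketch of the idea; the sizes, the random “experts,” and the gating weights are invented for illustration, and DeepSeek’s actual engineering is far more involved.

```python
# A minimal mixture-of-experts sketch in plain NumPy.  Hypothetical sizes;
# real systems route between layers of a transformer, not whole models.
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, D, TOP_K = 8, 16, 2

# Each "expert" is a tiny random linear layer standing in for a specialized model.
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]
gate_w = rng.normal(size=(D, N_EXPERTS))   # router that scores experts per token

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a token vector x to its top-k experts and blend their outputs."""
    scores = x @ gate_w                      # one score per expert
    top = np.argsort(scores)[-TOP_K:]        # indices of the k best-scoring experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over winners
    # Only the chosen experts actually run, which is the claimed compute savings.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=D)
print(moe_forward(token).shape)  # (16,)
```

Because only the top-scoring experts run for each token, computation per token falls even as total parameters grow – one way fewer chips can suffice.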

The next day, though probably only hours later, brought “DeepSeek’s Rise:  How a Chinese Start-Up Went From Stock Trader to A.I. Star” (Meaghan Tobin, Paul Mozur, and Alexandra Stevenson, The New York Times).  It was “a business using A.I. to make bets in the Chinese stock market” that “pursued a new opportunity” and “zeroed in on research” on “advanced A.I.”  “Do China’s A.I. Advances Mean U.S. Technology Controls Have Failed?” (Ana Swanson and Meaghan Tobin, The New York Times, January 28th)?  No, because such limitations had not yet gone into effect when DeepSeek bought its chips.  In contrast, “DeepSeek will not be able to legally purchase the newest generation of A.I. chips that Nvidia is rolling out right now.”

We got two opposite views on where these stocks are going.  Per Adria Cimino in The Motley Fool on January 28th, “Nvidia predicted to soar in 2025 thanks to this one thing.”  That thing is innovation, as the company may release two new chip architectures before this year is out.  Alternatively, in “I Study Financial Markets.  The Nvidia Rout Is Only the Start,” in The New York Times that day, Mihir A. Desai claimed that “Big Tech is eating itself alive with its component companies throwing more and more cash at investments in one another that are most likely to generate less and less of a return,” and “investors see these companies as a safe bet and have thus stopped demanding significant immediate returns,” resulting in a “massive influx of cheap money.”

What’s going on here?  I have three thoughts that may help us understand.

First, the run-up of Nvidia and, to a lesser extent, other AI stocks has long been paper-thin.  Despite all the money involved, a few people at a moderate-sized company can have a huge and sudden impact on these valuations.

Second, both the training of AI systems and the way they decide what to say are black boxes.  Even top researchers fail to comprehend exactly how the technology works, and therefore what it requires.  It is hardly a mature science.

Third, despite the appearance of transparency provided by such things as formal papers, the Chinese government and Chinese society are closed.  We cannot verify, for example, the provenance and originality of DeepSeek’s output, or how much in resources its development actually required.  If the announcement caused $1 trillion in paper assets to disappear, it could be that $500 billion, or more, was due to exaggeration or downright fraud.  Although the amounts of money are unprecedented, this would not be the first time – or the 20th – that the stock market has been shaken by faulty information.  The technicians at the AI companies will scrutinize DeepSeek’s products and will draw conclusions, but they will not be finished today or tomorrow.  In the meantime, the rest of us need to look before leaping.

Friday, January 24, 2025

The First Batch of Trump’s Executive Orders: How Good Will They Be for Jobs and the Economy?

After his inauguration, President Donald Trump let little grass grow under his feet.  Soon enough to be included in “Tracking Trump’s executive orders:  What he’s signed so far” (Avery Lotz, Axios.com), published around 9am Eastern Time on Wednesday, or 45 hours into his presidency, he had issued 29 of them.  The two seeking to end birthright citizenship, a right ensconced in our Constitution, were stillborn, and in fact were blocked by a judge yesterday.

Otherwise, what effect will his acts have on our employment and prosperity? 

In assessing their probable effects, I took “jobs” to mean the number of full-time positions for all wanting them, regardless of pay.  For “the economy” I figured their probable impact on a family near the 50th percentile of American affluence, perhaps a family of four with $70,000 annual household income, an ordinary house or apartment, and access to reasonable lifestyle choices.  I judged each of the remaining 27 orders as positive, neutral, or negative on both, without attempting to assess the degree, so each ended up with two scores, +1, 0, or -1 on employment and the same on prosperity.  Here are the values I assigned, within the nine groups Lotz put them into; a toy tally of the scheme appears after the groups.

There were four in the “immigration” section.  To Trump’s naming “certain international cartels” and other organizations as terrorist, I assigned no effect on jobs or the economy.  On suspending refugee resettlements I gave a plus on jobs, since more would stay available, but a minus on the economy, as it would lose relatively low-paid workers.  For similar reasons I attributed the same to the flights canceled for those from Afghanistan, and to the end of “parole programs.”  Overall, this section ended at +3 for jobs and -3 for the economy.

The two orders Trump issued under the “Remain in Mexico” heading, canceling border crossing appointments and empowering officials to “repel, repatriate or remove any alien engaged in the invasion” of the southern border, drew my same assessment, making the section total +2 for jobs and -2 for the economy, for a running total of +5 and -5.

The next, “energy and environment,” broke the pattern.  I considered freeing up Alaska coastal areas for energy production positive for both jobs and the economy.  For mirror-image reasons, though, his stopping “approvals, rights of way, permits, leases, or loans for onshore or offshore wind projects” would be negative for both, meaning no net effect here, with the overall totals of +5 and -5 staying the same.

Trump had three executive orders pertaining to climate-related policies.  Withdrawing from the Paris Climate Agreement broke even.  Allowing “application reviews for liquefied natural gas export projects, which were paused” precipitated a gain for jobs but no change for prosperity.  His revoking “a 2021 Biden executive order that set a goal for 50% of US vehicle sales to be electric by 2030,” meaning that liquid fuel vehicles would replace some future ones, had no net effect.  Overall, this section added +1 for jobs and no change to the economy, getting us to +6 and -5.

Five of the executive orders were “targeting DEI and transgender Americans.”  None of the five had any effect on employment.  Since prosperity is not only what tangible things people can get but the range of other choices they have and can implement, I assigned detractions for his proclamation that “sexes are not changeable and are grounded in fundamental and incontrovertible reality” and his eliminating policies that “widened sex discrimination protections to include sexual orientation and gender identity.”  Those were offset by prosperity improvements, to my way of thinking, from ordering the Federal Aviation Administration “to immediately return to non-discriminatory, merit-based hiring,” and the “compliance investigations of publicly traded corporations” among others.  With the executive order moving toward banning transgender people from the military having no required action and therefore no impact, the section did the same:  jobs still +6 so far, the economy still -5.

Two more were in the “other executive orders affecting federal workers” category.  The first, mandating “a full-time return to in-office work for federal employees” and instituting “a hiring freeze on government positions,” got a minus for jobs on the second part and nothing on the first.  The other, “which could make it easier to fire civil servants deemed disloyal,” caused a drop in prosperity and no change to jobs.  That makes this section -1 for each, bringing us to +5 and -6.

There were also two in the “health executive orders” group.  Leaving the World Health Organization meant a cut in prosperity, with no effect on employment.  Rescinding “a 2022 Biden order to lower the cost of prescription drugs” I evaluated as the same.  The running totals here stand at +5 and -8.

The final category, covering “TikTok extension, DOGE and more executive orders,” included seven more.  Ensuring government agencies do not “unconstitutionally abridge the free speech of any American citizen” was worth a gain only to prosperity.  “Ordering a review of trade practices and agreements,” and revoking various security clearances, had no effect on either.  “Formally establishing the Department of Government Efficiency,” which could turn out to be the best Trump change, was worth increases in both jobs and the economy.  The remaining three – “suspending the TikTok ban for 75 days,” saying that federal buildings should “respect regional, traditional, and classical architectural heritage,” and the silly “renaming,” if people actually follow that, of Denali and the Gulf of Mexico – had no effects.  That brings the final total to a gain of 6 for jobs and a loss of 6 to the economy.
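To make the bookkeeping concrete, here is a toy version of the tally in Python.  The entries are a sample from the groups above, not the full list of 27, so the printed totals are partial; the point is the scheme, not the data.

```python
# Toy tally of the scoring scheme: each order gets (jobs, economy) scores of
# +1, 0, or -1.  These six entries are examples from the text, not all 27.
orders = {
    "suspend refugee resettlements": (+1, -1),
    "name cartels as terrorist organizations": (0, 0),
    "open Alaska coastal areas to energy production": (+1, +1),
    "halt onshore/offshore wind approvals": (-1, -1),
    "establish DOGE": (+1, +1),
    "leave the World Health Organization": (0, -1),
}

jobs = sum(j for j, _ in orders.values())
economy = sum(e for _, e in orders.values())
print(f"jobs {jobs:+d}, economy {economy:+d}")  # partial tally: jobs +2, economy -1
```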

Is gaining jobs at the expense of prosperity, as strange as that sounds, what we can expect from Trump’s current term?  Probably it is.  The issue on which he campaigned, and with which he may be most closely associated, tariffs, would also clearly help the number of American jobs but hurt American prosperity, as people not employed in those areas will be paying more.  Although we do not know whether these orders will be sustained, their general, overall direction is clear.  That will probably be matched by Trump’s legislative initiatives.

Accordingly, we will probably be looking at lower unemployment but higher prices.  That combination means, disappointingly to many, that interest rates should not go down; if they do, we would be risking both a superheated jobs market and even more resurgent inflation.  So, although Trump has been wishing for lower interest rates, it will be his policies that prevent that.  Such is ironic – but many other things about the next four years will also be.

Friday, January 17, 2025

Four Corporate Practices That May Surprise You – And Displease You

What have those pesky managers at our largest companies been up to?

In “Is Your Driving Being Secretly Scored?” (The New York Times, June 9th), author Kashmir Hill asked, “You know you have a credit score.  Did you know that you might also have a driving score?”  That score, gathered through “telematics,” “reflects the safety of your driving habits – how often you slam on the brakes, speed, look at your phone or drive late at night,” and is supplied to insurance companies by car manufacturers or “from apps that drivers already have on their phones,” which can include Life360, MyRadar, and GasBuddy.  These tools often have their extra capabilities explained unspecifically in legal-looking fine print, such as “we may collect third party data and reports.”  Yet auto insurers have long used personal data, so this is nothing totally new.  In most cases, it can be shut down, or you can choose to do as I did – leave it alone, knowing that relaying boring driving habits can only reduce your premiums.  Those more privacy-concerned can dig out this article for much more – it printed to 11 pages.
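Insurers do not publish these formulas, so what follows is a purely hypothetical sketch of how such a score might be computed; the event types come from Hill’s description, but the weights and scale are invented.

```python
# A hedged guess at a telematics driving score.  Insurers keep their actual
# formulas private, so the penalty weights and 0-100 scale here are invented.
def driving_score(hard_brakes: int, speeding_events: int, phone_uses: int,
                  late_night_trips: int, miles: float) -> float:
    """Return a 0-100 score, higher meaning safer; penalties scale per 100 miles."""
    if miles == 0:
        return 100.0
    per_100 = 100.0 / miles
    penalty = (4.0 * hard_brakes + 3.0 * speeding_events +
               5.0 * phone_uses + 2.0 * late_night_trips) * per_100
    return max(0.0, 100.0 - penalty)

print(driving_score(hard_brakes=3, speeding_events=2, phone_uses=1,
                    late_night_trips=4, miles=800))  # 96.125
```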

Another thing I hadn’t seen before, though its absence was glaringly obvious, was from Erica Lamberg in Fox Business on July 16th: “Hot career trend ‘hushed hybrid’ has managers choosing the employees who have flex work arrangements.”  Back in the day, and since then as far as I can see, employers did not seem to weigh productivity or responsible behavior when deciding whether to allow workers to stay at home; now, despite official policies banning that, we have better employees secretly being given the privilege.  “Hushed hybrid” can be defined as “managers overruling, dismissing or choosing not to enforce a company’s return-to-office policies.”  Although it is high time that firms used individual assessments, formal or not, to decide who can work remotely, the problem is that those not chosen may feel deceived about the true policy.  It would be better if management could do this openly – if there are no unions involved, it seems they should be able to.

A few years ago we got publicity about different customers being charged different prices, even when all aspects of the transactions were identical or nearly so.  The practice may be expanding with new technology, as “FTC probes AI-powered ‘surveillance pricing’ at Mastercard, JPMorgan Chase, McKinsey and others” (Eric Revell, Fox Business, July 23rd).  The new method uses “AI and other technology” combined with “personal information… such as their location, demographics, credit history, and browsing or shopping history” “to analyze consumer data to help set price targets for products and services.”  The Federal Trade Commission, along with masses of people buying things, did not like that, and it may be banned.
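For illustration only, a sketch of the mechanism being probed might look like the following; the signals, weights, and base price are all invented here, and the real systems reportedly draw on far richer personal data.

```python
# An illustrative sketch of "surveillance pricing": nudging a price target
# up or down from personal signals.  All features and multipliers below are
# hypothetical, meant only to show the mechanism the FTC is examining.
BASE_PRICE = 50.00

def price_target(profile: dict) -> float:
    """Return a per-customer price target from a profile of invented signals."""
    multiplier = 1.0
    if profile.get("zip_income_tier") == "high":
        multiplier += 0.10          # wealthier area, assumed less price-sensitive
    if profile.get("times_viewed_item", 0) >= 3:
        multiplier += 0.05          # repeated views suggest strong intent
    if profile.get("uses_coupons"):
        multiplier -= 0.08          # known bargain-hunter, discount to convert
    return round(BASE_PRICE * multiplier, 2)

print(price_target({"zip_income_tier": "high", "times_viewed_item": 4}))  # 57.5
```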

Workers’ long-time frenemy found the spotlight in “So, Human Resources Is Making You Miserable?” (David Segal, The New York Times, August 3rd).  HR, which “bugs a lot of employees and managers… seems to have more detractors than ever since the pandemic began,” when it “began to administer rules about remote work and pay transparency, programs to improve diversity, equity and inclusion and everything else that has rattled and changed the workplace in the last four years.”  Those in that department are themselves “aggravated or bummed out,” often because “office behavior post-Covid has become notably less civil,” resulting in them “being called in far more often to referee disputes.”  Employment site LinkedIn found three years ago that “H.R. had the highest turnover rate of any job it tracked.”  With more and more areas causing problems for them, its staff members often call it a “thankless job.” 

Perhaps fuller and consistently honest disclosure of practices, my largest wish for HR during my corporate career, would help their reputation – but that would be neither quick nor easy.  And so it goes with the other three situations.  People are more willing to accept a fair shake when they know the rules, even if they are not as favorable as they would like.  That is the moral of these stories.

Friday, January 10, 2025

It Looked Like a Positive Jobs Report Month, Despite AJSN Showing Latent Demand Up Almost 400,000 – But Was It?

This morning’s Bureau of Labor Statistics Employment Situation Summary was expected to be favorable, with published estimates I saw projecting 165,000, 165,000, and 170,000 net new nonfarm payroll positions.  Its 256,000 well exceeded those, and most of the other statistics followed.  Seasonally adjusted and unadjusted unemployment fell 0.1% and 0.2% to 4.1% and 3.8%.  The number of unemployed was off 200,000 to 6.9 million, with 100,000 fewer of those out for 27 weeks or longer, now 1.6 million.  The count of people working part-time for economic reasons, or thus far unsuccessfully seeking full-time work while holding on to lower-hours propositions, dropped another 100,000 to 4.4 million.  The two measures showing how common it is for Americans to be working or officially jobless, the employment-population ratio and the labor force participation rate, gained 0.2% and stayed the same, ending at 60.0% and 62.5%.  The unadjusted count of employed lost 162,000 to 161,294,000.  Average private nonfarm hourly payroll wages reached $35.69, up 8 cents from the previous month, a gain slightly less than inflation.

The American Job Shortage Number (AJSN), the metric showing how many more positions could be quickly filled if all knew that getting one would be little more than an ordinary daily errand, worsened with a 359,000 increase.

Almost all of the AJSN’s increase is probably illusory, as the Census Bureau greatly increased its estimate of the American population between November 16th and December 16th, the dates of record for this and the previous AJSN, and so the non-civilian et al. category, which takes the difference between the total population and those in any employment category, rose almost 3.4 million.  Accordingly, that category’s contribution to the AJSN jumped almost as much as the whole statistic did.  Otherwise, the fall in those officially unemployed was essentially offset by a gain in those wanting work but not looking for it during the previous year, and there were no other large changes.
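For readers new to the metric, a stylized version of the AJSN arithmetic appears below: each group outside work contributes its count times an assumed share who would take a job if one were easy to get.  The category names follow the discussion here, but the counts (beyond the 6.9 million officially unemployed, from the report above) and the shares are placeholders, not the published AJSN inputs.

```python
# Stylized AJSN-type calculation.  Counts and shares below are illustrative
# placeholders, NOT the actual December figures or the official AJSN weights.
categories = {
    # name: (count, assumed share who would quickly take an easy-to-get job)
    "officially unemployed": (6_900_000, 0.9),     # count from the report above
    "wanted work, did not search in past year": (3_000_000, 0.3),
    "discouraged": (400_000, 0.9),
    "non-civilian, institutionalized, unaccounted for": (3_400_000, 0.1),
    "expatriates": (9_000_000, 0.2),
}

# Latent demand is the sum of each group's weighted count.
latent_demand = sum(count * share for count, share in categories.values())
print(f"{latent_demand:,.0f} latent jobs")
```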

Compared with a year earlier, the AJSN grew just over 200,000, with the largest changes from a lower number of expatriates (subtracting 700,000 from the AJSN), more people officially jobless (adding 490,000), more in armed services or unaccounted for (adding 349,000), and more discouraged (adding 127,000). 

So what’s the real story here?  It was good, but tempered by a 683,000 net drop in the labor force, which, along with 432,000 more claiming no interest in work, means that too many people, perhaps the same ones who previously stepped back into the job-search world without finding what they wanted, are departing.  Will that be a problem in 2025?  We will continue tracking it; we need to know before getting enthusiastic about improvements mostly attributable to a smaller labor force.  In the meantime, though, the turtle took a fair-sized step forward.

Friday, January 3, 2025

Artificial Intelligence: The Energy, Computing Power, and Geographical Changes It Will Need, and Why That Is Good

One of several large concerns about AI is getting enough resources for it, namely electricity and data center capacity.  Here’s a fast look at seven articles showing it can’t make it on what’s out there now.

“How Amazon goes nuclear” (Dan DeFrancesco, Business Insider, October 25th) started with “What’s bigger than tech’s ambitious plans for generative AI?  The amount of energy needed to power it.”  It may call for a large, controversial source, as “Amazon has led a $500 million financing round for a company developing modular nuclear reactors.”  Google has started similar endeavors, and “should these data center efforts continue to struggle, companies’ big bets on generative AI could also falter.”

Otherwise, “AI’s leaders puzzle over energy question” (Marissa Newman, Bloomberg Tech Daily, October 30th).  That puzzling, happening at Dubai’s Future Investment Initiative, which was “mostly centered on AI,” included how to deal with a possible 40% rise in electricity use over the next decade – not for AI alone, but overall.  The Saudi Arabian Oil Company’s CEO “made his pitch” there for data centers using natural gas at relatively low cost.  Others saw merit.

Stateside, “Exxon Plans to Sell Electricity to Data Centers” (Rebecca F. Elliott, The New York Times, December 11th).  That electricity will also come from natural gas, generated at a large power plant of undisclosed cost and location, possibly completed by late 2029, and will be a new line of external business for ExxonMobil.

With these new facilities, it is fair to consider “How A.I. Could Reshape the Economic Geography of America” (Steve Lohr, The New York Times, December 26th).  Cities “well positioned to use A.I. to become more productive” include “Chattanooga and other once-struggling cities in the Midwest, Mid-Atlantic and South,” such as Dayton, Scranton, Savannah, and Greenville, each of which has “an educated work force, affordable housing and workers who are mostly in occupations and industries less likely to be replaced or disrupted by A.I.”  A variety of other businesses, many connected with trucking and freight, stand to benefit.

What is “The 19th-Century Technology That Threatens A.I.” (Azeem Azhar, still The New York Times, December 28th)?  It’s electricity, on which “America has a long way to go.”  In Virginia, “a hotbed for data centers,” those wanting “to connect to the grid” could face a seven-year wait, and “some counties in the state are introducing limits” on them.  Our country, per the author, has “a patchwork of conflicting regulations, outdated structures and misaligned investment incentives” slowing or stopping infrastructure building, along with “a skills gap and labor shortages in construction and engineering, a complex permitting process trapping projects in years of bureaucratic review across multiple agencies,” “high costs of capital,” and “local opposition.”  Overall, per Azhar, “if the United States truly wants to secure its leadership in A.I., it must equally invest in the energy systems that power it.”

In answer to a question arising naturally from the first source above, Bradley Hoppenstein, writing in CNBC, told us “Why tech giants such as Microsoft, Amazon, Google and Meta are betting big on nuclear power” (December 28th).  It’s because of “the energy demands of their data centers and AI models” that “nuclear power has a lot of benefits,” including emitting no carbon, providing “tremendous economic impact,” and being able to be “always on and run all the time.”  However, we haven’t seen anywhere near as much opposition from anti-nuclear groups as we probably will.

On the front page of the Sunday, December 29th New York Times business section was “Data Centers Are Fueling a New Gold Rush” (Karen Weise).  It named installations being built in less populated areas of Washington state, resulting in “electricians flocking to regions around the country that, at least for now, have power to spare” and “a housing crunch” almost certain to create more jobs.

The electricians in Washington were matter-of-fact about the boom not lasting forever.  When work runs out there, they hope to find it elsewhere, and probably will, even in the same industry.  No matter what happens in the long run with artificial intelligence, it is building economies now.  Don’t expect that to stop this year – or the next.

Friday, December 27, 2024

What Artificial Intelligence Users Are Doing with It – And Shouldn’t Be

It’s not only technical capabilities that give AI its meaning, but also what’s being done with the software itself.  What have we seen?

One result is that “Recruiters Are Getting Bombarded With Crappy, AI-Generated CVs” (Sharon Adarlo, Futurism.com, August 16th).  Now that AI has shown itself useful for mass job applications, and for cover letters as well as for identifying suitable openings, it’s no surprise that hopefuls are using it for resumes too, and the results aren’t as favorable.  Without sufficient editing, “many of them are badly written and generic sounding,” with language that is “clunky and generic” and fails “to show the candidate’s personality, their passions,” or “their story.”  The piece blames this problem on AI itself, but all recruiters need to do is disregard applications with resumes showing signs of AI prefabrication.  Since resumes are so short, it is not time-consuming to carefully revise those initially written by AI, and failure to do that can understandably be taken as showing what people would do on the job.
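As a crude illustration of how a recruiter’s tooling might flag unedited AI output, consider the toy filter below; the phrase list and threshold are invented, and real screening would need far more than string matching.

```python
# A naive illustration of screening for generic, boilerplate phrasing in
# resumes.  The phrase list and threshold are invented for this sketch; real
# recruiters and applicant-tracking tools use many more signals.
BOILERPLATE = [
    "results-driven professional",
    "proven track record",
    "dynamic team player",
    "leveraging cutting-edge",
    "passionate about delivering",
]

def looks_prefabricated(text: str, threshold: int = 2) -> bool:
    """Flag text containing several stock phrases common in unedited AI output."""
    hits = sum(phrase in text.lower() for phrase in BOILERPLATE)
    return hits >= threshold

sample = ("Results-driven professional with a proven track record of "
          "leveraging cutting-edge solutions.")
print(looks_prefabricated(sample))  # True (three phrases matched)
```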

In something that might have appealed to me in my early teens, “Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram” (Matt Burgess, Wired.com, October 15th).  The article credited “deepfake expert” Henry Ajder with finding a Telegram bot that “had been used to generate more than 100,000 explicit photos – including those of children.”  Now there are 50 of them, with “more than 4 million ‘monthly users’” combined.  The problem here is that there is no hope of stopping people from creating nude deepfakes, and therefore not enough reason for making them illegal.  Those depicting children, when passed to others, can be subject to the laws covering child pornography, but adults will need to understand that anyone can create such things from pictures of them clothed or even only of their faces, so we will all need to realize that such images are likely not real.  Unless people copyright pictures of themselves, it is time to accept that counterfeits will be created.

Another problem with fake AI creations was the subject of “Florida mother sues AI company over allegedly causing death of teen son” (Christina Shaw, Fox Business, October 24th).  In this, Character.AI was accused of “targeting” a 14-year-old boy “with anthropomorphic, hypersexualized, and frighteningly realistic experiences” involving conversations described as “text-based romantic and sexual interactions.”  The chatbot “misrepresented itself as a real person,” and, when he became “noticeably withdrawn” and “expressed thoughts of suicide,” it “repeatedly encouraged him to do so” – after which he did.  Here we have a problem with allowing children access to such features.  Companies will need to stop that, whether it is convenient or not.

How about this one: “Two Students Created Face Recognition Glasses.  It Wasn’t Hard.” (Kashmir Hill, The New York Times, October 24th).  Two Harvard undergraduates fashioned a pair that “relied on widely available technologies, including Meta glasses, which livestream video to Instagram… Face detection software, which captures faces that appear on the livestream… a face search engine called PimEyes, which finds sites on the internet where a person’s face appears,” “a ChatGPT-like tool that was able to parse the results from PimEyes to suggest a person’s name and occupation” and other data.  The creators, at local train stations, found that it “worked on about a third of the people they tested it on,” giving those people the experience of being identified, along with their work information and accomplishments.  It turned out that Meta had already “developed an early prototype,” but did not pursue its release “because of legal and ethical concerns.”  It is hard to blame any of the companies providing the products above – indeed, after the publicity this event received, “PimEyes removed the students’ access… because they had uploaded photos of people without their consent” – and, with AI as one ingredient, there will be many people combining capabilities like these to invade privacy.  This, conceptually, seems totally unviable to stop.

Meanwhile, “Office workers fear that AI use makes them seem lazy” (Patrick Kulp, Tech Brew, November 12th).  A Slack report invoked the word “stigma,” saying there was one for using AI at work, and that it was hurting “workforce adoption,” whose growth slowed this year from “six points in a single quarter” to one point “in the last five months,” ending at 33%.  A major issue was that employees had insufficient guidance on when they were allowed to use AI, which many had brought to work themselves.  A strange situation, and one that clearly calls for management involvement.

Finally, there were “10 things you should never tell an AI chatbot” (Kim Komando, Fox News, December 19th).  They are “passwords or login credentials,” “your name, address or phone number” (likely to be passed on), “sensitive financial information,” “medical or health data” as “AI isn’t HIPAA-compliant,” “asking for illegal advice” (may get you “flagged” if nothing else), “hate speech or harmful content” (likewise), “confidential work or business info,” “security question answers,” “explicit content” (which could also “get you banned”), and “other people’s personal info.”  Overall, “don’t tell a chatbot anything you wouldn’t want made public.”  As AI interfaces get cozier and cuddlier, it will become easier to overshare with them, but that is more dangerous than ever.
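Komando’s list suggests an obvious partial safeguard: scrub prompts on your own machine before they are sent.  Here is a minimal sketch; the regular expressions are simple illustrations that would miss plenty in practice.

```python
# A minimal client-side scrubber in the spirit of Komando's list: replace
# obviously sensitive patterns before a prompt ever reaches a chatbot.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(prompt: str) -> str:
    """Replace sensitive-looking substrings with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(scrub("My card is 4111 1111 1111 1111 and my email is jo@example.com."))
# -> My card is [CARD REDACTED] and my email is [EMAIL REDACTED].
```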

My proposed solutions above may not be acceptable forever, and are subject to laws.  Perhaps this will long be a problem when dealing with AI – that conceptually sound ways of handling emerging issues may clash with real life.  That is a challenge – but, as with so many other aspects of artificial intelligence, we can learn to handle it effectively.

Friday, December 20, 2024

Artificial Intelligence: A Visit to the Catastrophic Problems Café

One thing almost everyone creating, using, coding, regulating, or just plain writing or thinking about AI feels duty-bound to do is to consider the chance that the technology will destroy us.  I haven’t done that in print yet, so here I go.

In his broad-based On the Edge:  The Art of Risking Everything (Penguin Press, 2024), author Nate Silver devoted a 56-page chapter, “Termination,” to the chance that AI will obliterate humanity or almost so.  He said there was a wide range of what he called “p(doom)” opinions, or estimations of the chances of such an outcome.  He considered more precise definitions of doom – for example, does it mean that “every single member of the human species and all biological life on Earth dies,” or could it be only “the destruction of humanity’s long-term potential” or even “something where humans are kept in check” with “the people making the big calls” being “a coalition of AI systems”?  With doom defined as “all but five thousand humans ceasing to exist by 2100,” the average Silver found from “domain experts” on AI itself was 8.8%, and from “generalists who had historically been accurate when making other probabilistic predictions,” 0.7%.  The highest expert p(doom) named was “20 to 30 percent”, but there are certainly larger ones out there.

How would the technology do its dirty work?  One way was in “A.I. May Save Us, or May Construct Viruses to Kill Us” (The New York Times, July 27th).  Author Nicholas Kristof said that “for less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.”  That could happen through anything from bugs murdering indiscriminately all the way to something that “might be possible,” using DNA knowledge to create a product tailored to “kill or incapacitate” one specific person.  Kristof is a journalist, not a technician, but as much of AI thinking is conceptual now, his concerns are valid.

Another columnist in the New York Times soon thereafter came out with “Many People Fear A.I.  They Shouldn’t” (David Brooks, July 31st).  His view was that “many fears about A.I. are based on an underestimation of the human mind” – he cited “scholar” Michael Ignatieff as saying “what we do” was not algorithmic, but “a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”  He also wrote that while engineers claim to be “building machines that think like people,” per neuroscientists “that would be a neat trick, because we don’t know how people think.”

The next month, Greg Norman looked at the problem posed by Kristof above, in “Experts warn AI could generate ‘major epidemics or even pandemics’ – but how soon?” (Fox News, August 28th).  Stemming from “a paper published in the journal Science by co-authors from Johns Hopkins University, Stanford University and Fordham University,” AI models’ exposure to “substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields,” creates a worry.  Although “today’s AI models likely do not ‘substantially contribute’ to biological risks,” the chance that “essential ingredients to create highly concerning advanced biological models may already exist or soon will” could cause problems.

All of this depends, though, on what AI is allowed to access.  It is and will be able to formulate detailed deadly plans, but what from there?  A Princeton undergraduate, John A. Phillips, in 1976 wrote and submitted a term paper giving detailed plans for assembling an atomic bomb, with all information from readily available public sources.  Although one expert said it would have had about an even chance of detonating, it was never built.  That, for me, is why my p(doom) is very low, less than a tenth of one percent.  There is no indication that AI models can build things by themselves in the physical world.

So far, we are doing well at containing AI.  As for the future, Silver said that, if given a chance to “permanently and irrevocably” stop its progress, he would not, as, ultimately, “civilization needs to learn to live with the technology we’ve built, even if that means committing ourselves to a better set of values and institutions.”  We can deal with artificial intelligence – a vastly more difficult challenge we face is dealing with ourselves.  That’s the last word.  With that, it’s time to leave the café.